HADOOP-16769. LocalDirAllocator to provide diagnostics when file creation fails #4842
mehakmeet merged 4 commits into apache:trunk
Conversation
🎊 +1 overall
This message was automatically generated.
@steveloughran - Please help in reviewing whenever you have some free cycles. You reviewed the same JIRA earlier in #1892. Thanks.
@steveloughran - Please help in reviewing the PR when you have a free slot. Thanks.
steveloughran left a comment
Looks good; one minor change.
I'm away next week, but @mehakmeet can finish the review/merge.
    } catch (IOException e) {
      errorText = e.getMessage();
      diskException = e;
      if (LOG.isDebugEnabled()) {
Should just use a {} entry for the ctx.localDirs[] ref and let SLF4J handle the toString call if needed; you can then remove the outer debug-enabled check.
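For illustration, a minimal sketch of that suggestion, assuming SLF4J parameterized logging; the method, field names and message text here are illustrative, not the exact patch:

```java
import java.io.IOException;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ParameterizedLoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(ParameterizedLoggingSketch.class);

  static void logDiskError(String[] localDirs, IOException e) {
    // Before: explicit guard plus manual string building.
    if (LOG.isDebugEnabled()) {
      LOG.debug("Disk validation failed for dirs "
          + java.util.Arrays.toString(localDirs), e);
    }

    // After (the reviewer's suggestion): a {} placeholder. SLF4J renders the
    // argument only when debug logging is enabled, so the outer
    // isDebugEnabled() check can be dropped; the trailing exception is still
    // logged with its stack trace.
    LOG.debug("Disk validation failed for dirs {}", localDirs, e);
  }
}
```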
💔 -1 overall
This message was automatically generated.
The UT failure below is not relevant to this PR; it is being tracked in a different JIRA - HADOOP-18452.
@mehakmeet - I have addressed the minor change suggested by @steveloughran. Please help in finishing the review/merge.
mehakmeet left a comment
Looks good. Some minor comments.
One more question: the maxCapacity and the errorText only seem to be set when the file size is known; otherwise (L409), maxCapacity remains 0 and there is no errorText in the final exception message when file creation fails for an unknown file size. Is that intended?
    Context ctx = confChanged(conf);
    int numDirs = ctx.localDirs.length;
    int numDirsSearched = 0;
    long maxCapacity = 0;
Add a comment for a better description of this variable, like "Max capacity in any directory"...
Will add a comment for this variable.
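For example, the commented declaration could look something like this; the wording is only a suggestion, not the merged text:

```java
// Max capacity found in any of the configured local directories; reported in
// the DiskErrorException message when no directory can hold the requested size.
long maxCapacity = 0;
```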
    String dir0 = buildBufferDir(ROOT, 0);
    String dir1 = buildBufferDir(ROOT, 1);
    conf.set(CONTEXT, dir0 + "," + dir1);
    LambdaTestUtils.intercept(DiskErrorException.class, "as the max capacity in any directory is",
Use String.format() to include the path of the file and the size in the second argument for the contained string message, to verify that the path and size are being propagated in the error message.
Will address in the next commit.
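A rough sketch of what that assertion could look like; the "%s ... %s" message template is an assumption about the final exception wording, and conf/dirAllocator are the test's existing fields:

```java
// Illustrative only: the template below is a guess at the final exception
// message; the real wording comes from LocalDirAllocator.
String requestedPath = "p1/x";
long requestedSize = Long.MAX_VALUE - 1;
LambdaTestUtils.intercept(DiskErrorException.class,
    String.format("for %s with requested size %s", requestedPath, requestedSize),
    () -> dirAllocator.getLocalPathForWrite(requestedPath, requestedSize, conf));
```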
  }

  /**
   * Test to check the LocalDirAllocation for the less space HADOOP-16769.
nit: maybe cut the Hadoop JIRA and describe what the test does, like "Test to verify creating files using LocalDirAllocation with file size exceeding any directory capacity"...
Ack, will do that after this PR.
Sorry, I didn't get this; why not in this PR? I just meant to change the Javadoc to be more descriptive of the test below.
My bad, I misunderstood your comment as something else. I will make the change.
Force-pushed d9c5443 to 4491e83.
🎊 +1 overall
This message was automatically generated.
mehakmeet left a comment
LG. Can you also provide your test process? Just a rule we follow for any PR; in the case of S3A ITs, we also ask for the region of the bucket used for testing. Thanks.
| "p1/x", Long.MAX_VALUE - 1), "Expect a DiskErrorException.", | ||
| () -> dirAllocator.getLocalPathForWrite("p1/x", Long.MAX_VALUE - 1, conf)); | ||
| } | ||
| } No newline at end of file |
Add a blank line after the last bracket.
Ack, will make this change.
Thanks. Updated the description as well.
💔 -1 overall
This message was automatically generated.
Force-pushed c853ce3 to ec52220.
💔 -1 overall
This message was automatically generated.
The UT failure is not related to this patch. I also tried running the test multiple times locally and it passes.
@mehakmeet - I have addressed your comments. Please help in reviewing it.
Merged, thanks for contributing. @ashutoshcipher, can you please cherry-pick the same into branch-3.3?
HADOOP-16769. LocalDirAllocator to provide diagnostics when file creation fails (apache#4842). The patch provides detailed diagnostics of file creation failure in LocalDirAllocator. Contributed by: Ashutosh Gupta
@mehakmeet - PR for branch-3.3 #4896 |
Description of PR
LocalDirAllocator to provide diagnostics when file creation fails.
JIRA - HADOOP-16769
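For context, a hedged sketch of how a caller would see the improved diagnostics; the config key and directory paths are made up for illustration, and the exact exception message depends on this patch:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.LocalDirAllocator;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.util.DiskChecker.DiskErrorException;

public class LocalDirAllocatorDiagnosticsExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Hypothetical config key and directories, purely for illustration.
    conf.set("example.local.dirs", "/tmp/alloc/dir0,/tmp/alloc/dir1");
    LocalDirAllocator allocator = new LocalDirAllocator("example.local.dirs");
    try {
      // Request far more space than any configured directory can hold.
      Path p = allocator.getLocalPathForWrite("p1/x", Long.MAX_VALUE - 1, conf);
      System.out.println("Allocated " + p);
    } catch (DiskErrorException e) {
      // With this change the message should include the requested path and
      // size plus the max capacity found in any configured directory.
      System.err.println(e.getMessage());
    }
  }
}
```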
How was this patch tested?
Added a UT to test the patch.
For code changes:
If applicable, have you updated the LICENSE, LICENSE-binary, NOTICE-binary files?